Hadoop programming: create a file in HDFS, write content to it, and print it back

package zq;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class Write {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        // Point the client at the HDFS NameNode. fs.default.name is the older,
        // deprecated spelling of fs.defaultFS; both keys still work.
        conf.set("fs.default.name", "hdfs://localhost:9000");
        Path outFile = new Path("/user/hadoop/hadoopfile/t1");

        FileSystem hdfs = FileSystem.get(conf);
        // create() makes the file (and any missing parent directories) in HDFS.
        FSDataOutputStream outputStream = hdfs.create(outFile);
        // writeUTF stores the string in modified UTF-8, prefixed with a 2-byte length.
        outputStream.writeUTF("china cstor cstor china");
        outputStream.flush();
        outputStream.close();
        hdfs.close();
    }
}
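A side note: because writeUTF prepends a two-byte length header, a plain `hdfs dfs -cat` of the file will show two extra bytes before the text. If you only need ordinary text, a minimal sketch like the one below writes raw UTF-8 bytes instead. The class name WritePlain is made up for illustration; it reuses the same NameNode address and path as above, and uses fs.defaultFS, the current name of the deprecated fs.default.name key.

package zq;

import java.io.IOException;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WritePlain {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        // Assumed to be the same single-node cluster as above; adjust as needed.
        conf.set("fs.defaultFS", "hdfs://localhost:9000");
        Path outFile = new Path("/user/hadoop/hadoopfile/t1");

        try (FileSystem hdfs = FileSystem.get(conf)) {
            if (hdfs.exists(outFile)) {
                System.out.println(outFile + " already exists and will be overwritten");
            }
            // create(path, true) overwrites any existing file at that path.
            try (FSDataOutputStream out = hdfs.create(outFile, true)) {
                // Raw UTF-8 bytes: no length prefix, readable with `hdfs dfs -cat`.
                out.write("china cstor cstor china".getBytes(StandardCharsets.UTF_8));
            }
        }
    }
}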

Print the contents of the file just written to HDFS:

package output;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class Read {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        conf.set("fs.default.name", "hdfs://localhost:9000");
        Path inFile = new Path("/user/hadoop/hadoopfile/t1");

        FileSystem hdfs = FileSystem.get(conf);
        FSDataInputStream inputStream = hdfs.open(inFile);
        // readUTF matches the writeUTF call used when the file was created.
        System.out.println("myfile:" + inputStream.readUTF());
        inputStream.close();
        hdfs.close();
    }
}
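readUTF only works here because the file was written with writeUTF. For a plain-text file (for example one uploaded with `hdfs dfs -put`), a line-oriented read is more typical. A minimal sketch, assuming the same path and a hypothetical class name ReadLines:

package output;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReadLines {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://localhost:9000");
        Path inFile = new Path("/user/hadoop/hadoopfile/t1");

        // try-with-resources closes both the reader and the FileSystem handle.
        try (FileSystem hdfs = FileSystem.get(conf);
             BufferedReader reader = new BufferedReader(
                     new InputStreamReader(hdfs.open(inFile), StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}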
